Provable Dictionary Learning via Column Signatures

Authors

  • Sanjeev Arora
  • Aditya Bhaskara
  • Rong Ge
  • Tengyu Ma
Abstract

In dictionary learning, also known as sparse coding, we are given samples of the form y = Ax, where x ∈ R^m is an unknown random sparse vector and A is an unknown dictionary matrix in R^{n×m} (usually m > n, which is the overcomplete case). The goal is to learn A and x. This problem has been studied in neuroscience, machine learning, vision, and image processing. In practice it is solved by heuristic algorithms, and provable algorithms have seemed hard to find. Recently, provable algorithms were found that work if the unknown feature vector x is √n-sparse or even sparser: [SWW12] did this for dictionaries with m = n; [AGM13] gave an algorithm for overcomplete (m > n) and incoherent matrices A; and [AAN13] handled a similar case but with somewhat weaker guarantees. This raised the problem of designing provable algorithms that allow sparsity beyond √n in the hidden vector x. The current paper designs algorithms that allow sparsity up to n/poly(log n). They work for a class of matrices in which features are individually recoverable, a new notion identified in this paper that may motivate further work. The algorithms run in quasipolynomial time because they use limited enumeration.

∗ Princeton University, Computer Science Department and Center for Computational Intractability. Email: [email protected]. This work is supported by NSF grants CCF-0832797, CCF-1117309, CCF-1302518, DMS-1317308, and a Simons Investigator Grant.
† Google Research NYC. Email: [email protected]. Part of this work was done while the author was a postdoc at EPFL, Switzerland.
‡ Microsoft Research. Email: [email protected]. Part of this work was done while the author was a graduate student at Princeton University and was supported in part by NSF grants CCF-0832797, CCF-1117309, CCF-1302518, DMS-1317308, and a Simons Investigator Grant.
§ Princeton University, Computer Science Department and Center for Computational Intractability. Email: [email protected]. This work is supported by NSF grants CCF-0832797, CCF-1117309, CCF-1302518, DMS-1317308, and a Simons Investigator Grant.
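To make the setup concrete, the following is a minimal Python sketch of the generative model y = Ax described above. The dictionary here is just a random column-normalized matrix, the sparse coefficients are random ±1 values on a random support, and the dimensions and sparsity level are illustrative assumptions only; this is not the paper's algorithm or its precise generative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    n, m = 64, 128                        # m > n: overcomplete dictionary (illustrative sizes)
    k = max(n // int(np.log(n)) ** 2, 1)  # sparsity roughly n / polylog(n), the regime targeted by the paper

    # Hypothetical dictionary: random columns, normalized to unit length.
    A = rng.standard_normal((n, m))
    A /= np.linalg.norm(A, axis=0)

    def sample(num_samples: int) -> np.ndarray:
        """Draw samples y = A x, where x is a random k-sparse vector in R^m."""
        ys = np.empty((n, num_samples))
        for i in range(num_samples):
            x = np.zeros(m)
            support = rng.choice(m, size=k, replace=False)  # random support of size k
            x[support] = rng.choice([-1.0, 1.0], size=k)    # random signs on the support (assumption)
            ys[:, i] = A @ x
        return ys

    Y = sample(1000)  # the learner sees only these samples and must recover A (up to permutation and sign)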


Similar resources

A Provable Approach for Double-Sparse Coding

Sparse coding is a crucial subroutine in algorithms for various signal processing, deep learning, and other machine learning applications. The central goal is to learn an overcomplete dictionary that can sparsely represent a given dataset. However, storage, transmission, and processing of the learned dictionary can be untenably high if the data dimension is high. In this paper, we consider the ...

An Improved Analysis of the ER-SpUD Dictionary Learning Algorithm

In dictionary learning we observe Y = AX + E for some Y ∈ R^{n×p}, A ∈ R^{n×m}, and X ∈ R^{m×p}, where p ≥ max{n, m}, and typically m ≥ n. The matrix Y is observed, and A, X, E are unknown. Here E is a “noise” matrix of small norm, and X is column-wise sparse. The matrix A is referred to as a dictionary, and its columns as atoms. Then, given some small number p of samples, i.e. columns of Y, the goal is to ...

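As a rough, self-contained illustration of the observation model Y = AX + E in the abstract above, the sketch below generates such data with hypothetical choices (random column-normalized dictionary, ±1 coefficients on a random support, small Gaussian noise); it is not the ER-SpUD algorithm or its exact assumptions.

    import numpy as np

    rng = np.random.default_rng(1)

    n, m, p = 32, 48, 200   # p >= max(n, m), typically m >= n (illustrative sizes)
    k = 4                   # nonzeros per column of X (illustrative sparsity)

    # Dictionary A with unit-norm columns (atoms).
    A = rng.standard_normal((n, m))
    A /= np.linalg.norm(A, axis=0)

    # Column-wise sparse coefficient matrix X.
    X = np.zeros((m, p))
    for j in range(p):
        support = rng.choice(m, size=k, replace=False)
        X[support, j] = rng.choice([-1.0, 1.0], size=k)

    E = 0.01 * rng.standard_normal((n, p))  # "noise" matrix of small norm

    Y = A @ X + E  # only Y is observed; A, X, E are unknown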

Provably Accurate Double-Sparse Coding

Sparse coding is a crucial subroutine in algorithms for various signal processing, deep learning, and other machine learning applications. The central goal is to learn an overcomplete dictionary that can sparsely represent a given dataset. However, storage, transmission, and processing of the learned dictionary can be untenably high if the data dimension is high. In this paper, we consider the ...


An Efficient Dictionary Based Robust PCA via Sketching

In this paper, we examine the problem of locating outliers among a large number of inliers, with particular interest in the case where the outliers have a known basis. By a convex formulation of demixing, we provide provable guarantees for exact recovery of the space spanned by the inliers and the supports of the outlier columns, even when the rank of the inliers is high and the number of outliers is a constant...

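For illustration only, here is a sketch of the kind of data that problem setup describes: most columns are inliers lying in a low-dimensional subspace, and a constant number of outlier columns are generated from a known basis. All dimensions, distributions, and names (U, D, M) are hypothetical choices, not the paper's model or method.

    import numpy as np

    rng = np.random.default_rng(2)

    n = 50        # ambient dimension
    p = 500       # total number of columns, mostly inliers
    r = 5         # rank of the inlier subspace
    num_out = 3   # a constant number of outlier columns
    d = 20        # size of the known outlier basis

    # Inliers: columns lying in an r-dimensional subspace of R^n.
    U = np.linalg.qr(rng.standard_normal((n, r)))[0]        # orthonormal basis of the inlier subspace
    inliers = U @ rng.standard_normal((r, p - num_out))

    # Outliers: combinations of columns of a known basis D.
    D = rng.standard_normal((n, d))
    outliers = D @ rng.standard_normal((d, num_out))

    # Observed matrix: inliers and outliers in random column order.
    M = np.concatenate([inliers, outliers], axis=1)[:, rng.permutation(p)]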



Publication date: 2014